Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery. [https://machinelearningmastery.com/]
SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The Kaggle ASL Alphabet Images dataset presents a multi-class classification problem, where we attempt to predict one of several (more than two) possible outcomes.
INTRODUCTION: The dataset is a collection of images of American Sign Language hand signs, separated into 29 folders that represent the classes. The training dataset contains 87,000 images, each 200x200 pixels. Of the 29 classes, 26 are the letters A-Z and three are the labels SPACE, DELETE, and NOTHING. The test dataset contains only 28 images, to encourage the use of real-world test images.
In this Take4 iteration, we will construct a CNN model based on the ResNet152V2 architecture to predict the ASL alphabet letters based on the available images.
ANALYSIS: In this Take4 iteration, the ResNet152V2 model achieved an accuracy score of 99.83% on the training dataset after ten epochs. The same model processed the validation dataset with an accuracy measurement of 95.71%. Finally, the trained model achieved an accuracy score of 100% on the 28-image test dataset.
CONCLUSION: In this iteration, the ResNet152V2-based CNN model appeared to be suitable for modeling this dataset. We should consider continuing with TensorFlow for further modeling of this dataset.
Dataset Used: Kaggle ASL Alphabet Images
Dataset ML Model: Multi-class image classification with numerical attributes
Dataset Reference: https://www.kaggle.com/grassknoted/asl-alphabet
One potential source of performance benchmarks: https://www.kaggle.com/grassknoted/asl-alphabet/code
A deep-learning image classification project generally can be broken down into five major tasks:
1. Prepare Environment
2. Load and Prepare Images
3. Define and Train Models
4. Evaluate and Optimize Models
5. Finalize Model and Make Predictions
# Install the packages to support accessing environment variable and SQL databases
# !pip install python-dotenv PyMySQL boto3
# Retrieve GPU configuration information from Colab
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
    print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
    print('and then re-execute this cell.')
else:
    print(gpu_info)
Fri Aug 13 23:52:35 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 47C P0 42W / 250W | 0MiB / 16280MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
# Retrieve memory configuration information from Colab
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
if ram_gb < 20:
    print('To enable a high-RAM runtime, select the Runtime → "Change runtime type"')
    print('menu, and then select High-RAM in the Runtime shape dropdown. Then, ')
    print('re-execute this cell.')
else:
    print('You are using a high-RAM runtime!')
Your runtime has 13.6 gigabytes of available RAM

To enable a high-RAM runtime, select the Runtime → "Change runtime type"
menu, and then select High-RAM in the Runtime shape dropdown. Then, 
re-execute this cell.
# Retrieve CPU information from the system
ncpu = !nproc
print("The number of available CPUs is:", ncpu[0])
The number of available CPUs is: 2
# Set the random seed number for reproducible results
RNG_SEED = 888
# Load libraries and packages
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import sys
from datetime import datetime
import zipfile
import h5py
# import boto3
# from dotenv import load_dotenv
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ReduceLROnPlateau
# Begin the timer for the script processing
start_time_script = datetime.now()
# Set up the number of CPU cores available for multi-thread processing
N_JOBS = 1
# Set up the flag for sending progress emails (setting to True will send status emails!)
NOTIFY_STATUS = False
# Set the percentage sizes for splitting the dataset
VAL_SET_RATIO = 0.2
# TEST_SET_RATIO = 0.5
# Set various default modeling parameters
DEFAULT_LOSS = 'categorical_crossentropy'
DEFAULT_METRICS = ['accuracy']
DEFAULT_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=0.0001)
DEFAULT_INITIALIZER = tf.keras.initializers.RandomNormal(seed=RNG_SEED)
CLASSIFIER_ACTIVATION = 'softmax'
MAX_EPOCHS = 10
BATCH_SIZE = 32
# RAW_IMAGE_SIZE = (100, 100)
TARGET_IMAGE_SIZE = (224, 224)
INPUT_IMAGE_SHAPE = (TARGET_IMAGE_SIZE[0], TARGET_IMAGE_SIZE[1], 3)
NUM_CLASSES = 29
CLASS_LABELS = ['A','B','C','D','E',
'F','G','H','I','J',
'K','L','M','N','O',
'P','Q','R','S','T',
'U','V','W','X','Y',
'Z','del','nothing','space']
# CLASS_NAMES = []
# Define the labels to use for graphing the data
train_metric = "accuracy"
validation_metric = "val_accuracy"
train_loss = "loss"
validation_loss = "val_loss"
# Define the directory locations and file names
STAGING_DIR = 'staging/'
TRAIN_DIR = 'staging/asl_alphabet_train/asl_alphabet_train/'
# VALID_DIR = ''
TEST_DIR = 'staging/asl_alphabet_test/asl_alphabet_test/'
# TRAIN_DATASET = ''
# VALID_DATASET = ''
# TEST_DATASET = ''
# TRAIN_LABELS = ''
# VALID_LABELS = ''
# TEST_LABELS = ''
# OUTPUT_DIR = 'staging/'
# SAMPLE_SUBMISSION_CSV = 'sample_submission.csv'
# FINAL_SUBMISSION_CSV = 'submission.csv'
# Check the number of GPUs accessible through TensorFlow
print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))
# Print out the TensorFlow version for confirmation
print('TensorFlow version:', tf.__version__)
Num GPUs Available: 1
TensorFlow version: 2.5.0
# Set up the email notification function
def status_notify(msg_text):
    access_key = os.environ.get('SNS_ACCESS_KEY')
    secret_key = os.environ.get('SNS_SECRET_KEY')
    aws_region = os.environ.get('SNS_AWS_REGION')
    topic_arn = os.environ.get('SNS_TOPIC_ARN')
    if (access_key is None) or (secret_key is None) or (aws_region is None) or (topic_arn is None):
        sys.exit("Incomplete notification setup info. Script Processing Aborted!!!")
    sns = boto3.client('sns', aws_access_key_id=access_key, aws_secret_access_key=secret_key, region_name=aws_region)
    response = sns.publish(TopicArn=topic_arn, Message=msg_text)
    if response['ResponseMetadata']['HTTPStatusCode'] != 200:
        print('Status notification not OK with HTTP status code:', response['ResponseMetadata']['HTTPStatusCode'])
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 1 - Prepare Environment has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Reset the random number generators
def reset_random(x=RNG_SEED):
    random.seed(x)
    np.random.seed(x)
    tf.random.set_seed(x)
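As a quick sanity check (a hypothetical snippet, not part of the original notebook), reseeding should make the pseudo-random draws repeat exactly. The sketch below exercises only the `random` and NumPy generators; the notebook helper additionally reseeds TensorFlow with `tf.random.set_seed`.

```python
import random
import numpy as np

RNG_SEED = 888

def reset_random_sketch(x=RNG_SEED):
    # Simplified mirror of reset_random(); the notebook version
    # also calls tf.random.set_seed(x)
    random.seed(x)
    np.random.seed(x)

reset_random_sketch()
first_draw = np.random.rand(3)
reset_random_sketch()
second_draw = np.random.rand(3)
assert np.allclose(first_draw, second_draw)  # identical draws after reseeding
```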
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 1 - Prepare Environment completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 2 - Load and Prepare Images has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
!rm -rf staging/
!mkdir staging/
# !rm archive_asl_alphabet.zip
if not os.path.exists('archive_asl_alphabet.zip'):
    !wget https://dainesanalytics.com/datasets/kaggle-asl-alphabet-images/archive_asl_alphabet.zip
--2021-08-13 23:52:41--  https://dainesanalytics.com/datasets/kaggle-asl-alphabet-images/archive_asl_alphabet.zip
Resolving dainesanalytics.com (dainesanalytics.com)... 13.226.50.65, 13.226.50.69, 13.226.50.115, ...
Connecting to dainesanalytics.com (dainesanalytics.com)|13.226.50.65|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1100887034 (1.0G) [application/zip]
Saving to: ‘archive_asl_alphabet.zip’

archive_asl_alphabe 100%[===================>]   1.02G  82.4MB/s    in 13s

2021-08-13 23:52:54 (80.7 MB/s) - ‘archive_asl_alphabet.zip’ saved [1100887034/1100887034]
dataset_zip = 'archive_asl_alphabet.zip'
zip_ref = zipfile.ZipFile(dataset_zip, 'r')
zip_ref.extractall(STAGING_DIR)
zip_ref.close()
# Brief listing of training image files for each class
for c_label in CLASS_LABELS:
    training_class_dir = os.path.join(TRAIN_DIR, c_label)
    training_class_files = os.listdir(training_class_dir)
    print('Number of training images for', c_label, ':', len(training_class_files))
    print('Training samples for', c_label, ':', training_class_files[:5])
Number of training images for A : 3000
Training samples for A : ['A532.jpg', 'A2176.jpg', 'A1678.jpg', 'A1322.jpg', 'A438.jpg']
Number of training images for B : 3000
Training samples for B : ['B1615.jpg', 'B2653.jpg', 'B1797.jpg', 'B1520.jpg', 'B255.jpg']
Number of training images for C : 3000
Training samples for C : ['C702.jpg', 'C2563.jpg', 'C422.jpg', 'C2064.jpg', 'C2643.jpg']
Number of training images for D : 3000
Training samples for D : ['D147.jpg', 'D347.jpg', 'D1593.jpg', 'D2072.jpg', 'D1625.jpg']
Number of training images for E : 3000
Training samples for E : ['E744.jpg', 'E541.jpg', 'E656.jpg', 'E71.jpg', 'E2182.jpg']
Number of training images for F : 3000
Training samples for F : ['F2650.jpg', 'F1390.jpg', 'F414.jpg', 'F1251.jpg', 'F2991.jpg']
Number of training images for G : 3000
Training samples for G : ['G2503.jpg', 'G1990.jpg', 'G2665.jpg', 'G749.jpg', 'G2765.jpg']
Number of training images for H : 3000
Training samples for H : ['H797.jpg', 'H1065.jpg', 'H811.jpg', 'H614.jpg', 'H1381.jpg']
Number of training images for I : 3000
Training samples for I : ['I1950.jpg', 'I2882.jpg', 'I556.jpg', 'I90.jpg', 'I2282.jpg']
Number of training images for J : 3000
Training samples for J : ['J2044.jpg', 'J1530.jpg', 'J2192.jpg', 'J1861.jpg', 'J418.jpg']
Number of training images for K : 3000
Training samples for K : ['K481.jpg', 'K1340.jpg', 'K2448.jpg', 'K2666.jpg', 'K2237.jpg']
Number of training images for L : 3000
Training samples for L : ['L746.jpg', 'L777.jpg', 'L2427.jpg', 'L304.jpg', 'L590.jpg']
Number of training images for M : 3000
Training samples for M : ['M2721.jpg', 'M1448.jpg', 'M2192.jpg', 'M2859.jpg', 'M2029.jpg']
Number of training images for N : 3000
Training samples for N : ['N35.jpg', 'N2484.jpg', 'N2521.jpg', 'N2623.jpg', 'N1605.jpg']
Number of training images for O : 3000
Training samples for O : ['O2463.jpg', 'O2644.jpg', 'O597.jpg', 'O595.jpg', 'O2244.jpg']
Number of training images for P : 3000
Training samples for P : ['P2526.jpg', 'P984.jpg', 'P1333.jpg', 'P400.jpg', 'P1835.jpg']
Number of training images for Q : 3000
Training samples for Q : ['Q2363.jpg', 'Q1341.jpg', 'Q739.jpg', 'Q2658.jpg', 'Q398.jpg']
Number of training images for R : 3000
Training samples for R : ['R2124.jpg', 'R371.jpg', 'R386.jpg', 'R713.jpg', 'R2656.jpg']
Number of training images for S : 3000
Training samples for S : ['S1548.jpg', 'S1181.jpg', 'S2133.jpg', 'S2443.jpg', 'S208.jpg']
Number of training images for T : 3000
Training samples for T : ['T591.jpg', 'T1559.jpg', 'T1273.jpg', 'T2197.jpg', 'T2480.jpg']
Number of training images for U : 3000
Training samples for U : ['U1341.jpg', 'U321.jpg', 'U1415.jpg', 'U582.jpg', 'U721.jpg']
Number of training images for V : 3000
Training samples for V : ['V808.jpg', 'V757.jpg', 'V213.jpg', 'V1296.jpg', 'V308.jpg']
Number of training images for W : 3000
Training samples for W : ['W1297.jpg', 'W545.jpg', 'W1813.jpg', 'W347.jpg', 'W1693.jpg']
Number of training images for X : 3000
Training samples for X : ['X1460.jpg', 'X1362.jpg', 'X2758.jpg', 'X690.jpg', 'X958.jpg']
Number of training images for Y : 3000
Training samples for Y : ['Y1571.jpg', 'Y1194.jpg', 'Y1431.jpg', 'Y2141.jpg', 'Y2462.jpg']
Number of training images for Z : 3000
Training samples for Z : ['Z1730.jpg', 'Z442.jpg', 'Z643.jpg', 'Z2009.jpg', 'Z1798.jpg']
Number of training images for del : 3000
Training samples for del : ['del2527.jpg', 'del293.jpg', 'del2419.jpg', 'del1386.jpg', 'del1062.jpg']
Number of training images for nothing : 3000
Training samples for nothing : ['nothing1701.jpg', 'nothing2141.jpg', 'nothing1171.jpg', 'nothing516.jpg', 'nothing1000.jpg']
Number of training images for space : 3000
Training samples for space : ['space1223.jpg', 'space1708.jpg', 'space2250.jpg', 'space1050.jpg', 'space1217.jpg']
# Plot some training images from the dataset
nrows = NUM_CLASSES
ncols = 4
training_examples = []
example_labels = []
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 3)
for c_label in CLASS_LABELS:
    training_class_dir = os.path.join(TRAIN_DIR, c_label)
    training_class_files = os.listdir(training_class_dir)
    for j in range(ncols):
        training_examples.append(c_label+'/'+training_class_files[j])
        example_labels.append(c_label)
# print(training_examples)
# print(example_labels)
for i, img_path in enumerate(training_examples):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i+1)
    sp.text(0, 0, example_labels[i])
    # sp.axis('Off')
    img = mpimg.imread(TRAIN_DIR + img_path)
    plt.imshow(img)
plt.show()
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 2 - Load and Prepare Images completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 3 - Define and Train Models has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
datagen_kwargs = dict(rescale=1./255, validation_split=VAL_SET_RATIO)
training_datagen = ImageDataGenerator(**datagen_kwargs)
validation_datagen = ImageDataGenerator(**datagen_kwargs)
dataflow_kwargs = dict(class_mode="categorical")
do_data_augmentation = False
if do_data_augmentation:
    training_datagen = ImageDataGenerator(rotation_range=90,
                                          horizontal_flip=True,
                                          vertical_flip=True,
                                          **datagen_kwargs)
print('Loading and pre-processing the training images...')
training_generator = training_datagen.flow_from_directory(directory=TRAIN_DIR,
                                                          target_size=TARGET_IMAGE_SIZE,
                                                          batch_size=BATCH_SIZE,
                                                          shuffle=True,
                                                          seed=RNG_SEED,
                                                          subset="training",
                                                          **dataflow_kwargs)
print('Number of training image batches per epoch of modeling:', len(training_generator))
print('Loading and pre-processing the validation images...')
validation_generator = validation_datagen.flow_from_directory(directory=TRAIN_DIR,
                                                              target_size=TARGET_IMAGE_SIZE,
                                                              batch_size=BATCH_SIZE,
                                                              shuffle=False,
                                                              subset="validation",
                                                              **dataflow_kwargs)
print('Number of validation image batches per epoch of modeling:', len(validation_generator))
Loading and pre-processing the training images...
Found 69600 images belonging to 29 classes.
Number of training image batches per epoch of modeling: 2175
Loading and pre-processing the validation images...
Found 17400 images belonging to 29 classes.
Number of validation image batches per epoch of modeling: 544
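The batch counts reported above follow directly from the 80/20 validation split and the batch size: Keras generators yield ceil(n_samples / batch_size) batches per epoch, since the last partial batch still counts. A quick standalone arithmetic check (using the image counts printed above):

```python
import math

BATCH_SIZE = 32
n_train, n_val = 69600, 17400  # 80/20 split of the 87,000 training images

# A generator's length is the number of batches per epoch,
# including the final partial batch
train_batches = math.ceil(n_train / BATCH_SIZE)
val_batches = math.ceil(n_val / BATCH_SIZE)
print(train_batches, val_batches)  # 2175 544
```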
# Define the function for plotting training results for comparison
def plot_metrics(history):
    fig, axs = plt.subplots(1, 2, figsize=(24, 15))
    metrics = [train_loss, train_metric]
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(2, 2, n+1)
        plt.plot(history.epoch, history.history[metric], color='blue', label='Train')
        plt.plot(history.epoch, history.history['val_'+metric], color='red', linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == train_loss:
            plt.ylim([0, plt.ylim()[1]])
        else:
            plt.ylim([0.5, 1.1])
        plt.legend()
# Define the baseline model for benchmarking
def create_nn_model(input_param=INPUT_IMAGE_SHAPE, output_param=NUM_CLASSES, dense_nodes=2048,
                    init_param=DEFAULT_INITIALIZER, classifier_activation=CLASSIFIER_ACTIVATION,
                    loss_param=DEFAULT_LOSS, opt_param=DEFAULT_OPTIMIZER, metrics_param=DEFAULT_METRICS):
    base_model = keras.applications.resnet_v2.ResNet152V2(include_top=False, weights='imagenet', input_shape=input_param)
    nn_model = keras.models.Sequential()
    nn_model.add(base_model)
    nn_model.add(keras.layers.Flatten())
    nn_model.add(keras.layers.Dense(dense_nodes, activation='relu', kernel_initializer=init_param))
    nn_model.add(keras.layers.Dense(output_param, activation=classifier_activation))
    nn_model.compile(loss=loss_param, optimizer=opt_param, metrics=metrics_param)
    return nn_model
# Initialize the neural network model and get the training results for plotting graph
start_time_module = datetime.now()
# learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience=3, verbose=1, factor=0.5, min_lr=0.00001)
reset_random()
nn_model_0 = create_nn_model()
nn_model_history = nn_model_0.fit(training_generator,
                                  epochs=MAX_EPOCHS,
                                  validation_data=validation_generator,
                                  # callbacks=[learning_rate_reduction],
                                  verbose=1)
print('Total time for model fitting:', (datetime.now() - start_time_module))
Epoch 1/10
2175/2175 [==============================] - 1013s 456ms/step - loss: 0.5759 - accuracy: 0.9411 - val_loss: 0.6695 - val_accuracy: 0.9028
Epoch 2/10
2175/2175 [==============================] - 985s 453ms/step - loss: 0.0954 - accuracy: 0.9867 - val_loss: 0.5329 - val_accuracy: 0.9352
Epoch 3/10
2175/2175 [==============================] - 984s 453ms/step - loss: 0.0561 - accuracy: 0.9917 - val_loss: 0.4141 - val_accuracy: 0.9384
Epoch 4/10
2175/2175 [==============================] - 984s 452ms/step - loss: 0.0572 - accuracy: 0.9913 - val_loss: 0.3214 - val_accuracy: 0.9379
Epoch 5/10
2175/2175 [==============================] - 985s 453ms/step - loss: 0.0388 - accuracy: 0.9931 - val_loss: 1.3317 - val_accuracy: 0.7868
Epoch 6/10
2175/2175 [==============================] - 983s 452ms/step - loss: 0.0348 - accuracy: 0.9934 - val_loss: 0.4371 - val_accuracy: 0.9269
Epoch 7/10
2175/2175 [==============================] - 983s 452ms/step - loss: 0.0236 - accuracy: 0.9959 - val_loss: 1.4571 - val_accuracy: 0.8356
Epoch 8/10
2175/2175 [==============================] - 983s 452ms/step - loss: 0.0179 - accuracy: 0.9967 - val_loss: 0.2768 - val_accuracy: 0.9526
Epoch 9/10
2175/2175 [==============================] - 983s 452ms/step - loss: 0.0210 - accuracy: 0.9965 - val_loss: 0.4225 - val_accuracy: 0.9325
Epoch 10/10
2175/2175 [==============================] - 982s 452ms/step - loss: 0.0088 - accuracy: 0.9983 - val_loss: 0.2418 - val_accuracy: 0.9571
Total time for model fitting: 2:44:33.897651
nn_model_0.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
resnet152v2 (Functional)     (None, 7, 7, 2048)        58331648  
_________________________________________________________________
flatten (Flatten)            (None, 100352)            0         
_________________________________________________________________
dense (Dense)                (None, 2048)              205522944 
_________________________________________________________________
dense_1 (Dense)              (None, 29)                59421     
=================================================================
Total params: 263,914,013
Trainable params: 263,770,269
Non-trainable params: 143,744
_________________________________________________________________
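The head's parameter counts in the summary can be re-derived by hand: with a 224x224 input, ResNet152V2 without its top emits a (7, 7, 2048) feature map, which the Flatten layer turns into a 100,352-element vector. A small arithmetic check (illustrative only):

```python
# Parameters of a Dense layer = inputs * units + units (weights plus biases)
flat = 7 * 7 * 2048                 # flattened ResNet152V2 output
dense_params = flat * 2048 + 2048   # the 2048-unit hidden Dense layer
out_params = 2048 * 29 + 29         # the 29-class softmax layer
print(flat, dense_params, out_params)  # 100352 205522944 59421
```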
plot_metrics(nn_model_history)
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 3 - Define and Train Models completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 4 - Evaluate and Optimize Models has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Not applicable for this iteration of modeling
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 4 - Evaluate and Optimize Models completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 5 - Finalize Model and Make Predictions has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
final_model = nn_model_0
final_model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
resnet152v2 (Functional)     (None, 7, 7, 2048)        58331648  
_________________________________________________________________
flatten (Flatten)            (None, 100352)            0         
_________________________________________________________________
dense (Dense)                (None, 2048)              205522944 
_________________________________________________________________
dense_1 (Dense)              (None, 29)                59421     
=================================================================
Total params: 263,914,013
Trainable params: 263,770,269
Non-trainable params: 143,744
_________________________________________________________________
testing_class_files = os.listdir(TEST_DIR)
print('Number of test images found:', len(testing_class_files))
Number of test images found: 28
test_images_df = pd.DataFrame(columns=['image_name','class_label'])
for image_file in testing_class_files:
    image_name = image_file
    class_label = image_name[0:image_file.find('_test')]
    # print('Found image:', image_name, 'with the class:', class_label)
    df_record = {'image_name': image_name,
                 'class_label': class_label}
    test_images_df = test_images_df.append(df_record, ignore_index=True)
print(test_images_df.head())
   image_name class_label
0  P_test.jpg           P
1  Y_test.jpg           Y
2  V_test.jpg           V
3  C_test.jpg           C
4  B_test.jpg           B
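The class label for each test file is recovered from its filename, since every test image is named `<label>_test.jpg`. A minimal sketch of that parsing rule (hypothetical helper, mirroring the slice used in the loop above):

```python
def label_from_filename(image_name):
    # '<label>_test.jpg' -> '<label>', same slice as
    # image_name[0:image_name.find('_test')] in the loop above
    return image_name[:image_name.find('_test')]

print(label_from_filename('P_test.jpg'))        # P
print(label_from_filename('nothing_test.jpg'))  # nothing
```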
print('Loading and pre-processing the testing images...')
testing_datagen = ImageDataGenerator(**datagen_kwargs)
testing_generator = testing_datagen.flow_from_dataframe(dataframe=test_images_df,
                                                        directory=TEST_DIR,
                                                        x_col='image_name',
                                                        y_col='class_label',
                                                        classes=CLASS_LABELS,
                                                        target_size=TARGET_IMAGE_SIZE,
                                                        shuffle=False,
                                                        **dataflow_kwargs)
print('Number of image batches per epoch of modeling:', len(testing_generator))
Loading and pre-processing the testing images...
Found 28 validated image filenames belonging to 29 classes.
Number of image batches per epoch of modeling: 1
final_model.evaluate(testing_generator, verbose=1)
1/1 [==============================] - 0s 462ms/step - loss: 0.0000e+00 - accuracy: 1.0000
[0.0, 1.0]
test_predictions = np.argmax(final_model.predict(testing_generator), axis=-1)
test_originals = testing_generator.labels
print('Accuracy Score:', accuracy_score(test_originals, test_predictions))
print(confusion_matrix(test_originals, test_predictions))
print(classification_report(test_originals, test_predictions))
Accuracy Score: 1.0
[[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]
precision recall f1-score support
0 1.00 1.00 1.00 1
1 1.00 1.00 1.00 1
2 1.00 1.00 1.00 1
3 1.00 1.00 1.00 1
4 1.00 1.00 1.00 1
5 1.00 1.00 1.00 1
6 1.00 1.00 1.00 1
7 1.00 1.00 1.00 1
8 1.00 1.00 1.00 1
9 1.00 1.00 1.00 1
10 1.00 1.00 1.00 1
11 1.00 1.00 1.00 1
12 1.00 1.00 1.00 1
13 1.00 1.00 1.00 1
14 1.00 1.00 1.00 1
15 1.00 1.00 1.00 1
16 1.00 1.00 1.00 1
17 1.00 1.00 1.00 1
18 1.00 1.00 1.00 1
19 1.00 1.00 1.00 1
20 1.00 1.00 1.00 1
21 1.00 1.00 1.00 1
22 1.00 1.00 1.00 1
23 1.00 1.00 1.00 1
24 1.00 1.00 1.00 1
25 1.00 1.00 1.00 1
27 1.00 1.00 1.00 1
28 1.00 1.00 1.00 1
accuracy 1.00 28
macro avg 1.00 1.00 1.00 28
weighted avg 1.00 1.00 1.00 28
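The perfect score follows directly from the confusion matrix: every prediction lands on the diagonal, so accuracy equals the trace divided by the total number of samples. A minimal illustration (synthetic identity matrix, not the actual predictions):

```python
import numpy as np

cm = np.eye(28, dtype=int)          # identity matrix = all 28 test images correct
accuracy = np.trace(cm) / cm.sum()  # diagonal hits over total samples
print(accuracy)  # 1.0
```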
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 5 - Finalize Model and Make Predictions completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
print('Total time for the script:', (datetime.now() - start_time_script))
Total time for the script: 2:45:43.299540